Bayesian covariance matrix estimation using a mixture of decomposable graphical models
Authors
Abstract
Estimating a covariance matrix efficiently and discovering its structure are important statistical problems with applications in many fields. This article takes a Bayesian approach to estimating the covariance matrix of Gaussian data. We use ideas from Gaussian graphical models and model selection to construct a prior for the covariance matrix that is a mixture over all decomposable graphs, where a graph means the configuration of nonzero off-diagonal elements in the inverse of the covariance matrix. Our prior for the covariance matrix is such that the probability of each graph size is specified by the user and graphs of equal size are assigned equal probability. Most previous approaches assume that all graphs are equally probable. We give empirical results showing that the prior that assigns equal probability over graph sizes outperforms the prior that assigns equal probability over all graphs, both in identifying the correct decomposable graph and in estimating the covariance matrix more efficiently. The advantage is greatest when the number of observations is small relative to the dimension of the covariance matrix. Our method requires the number of decomposable graphs for each graph size. We show how to estimate these numbers using simulation and that the simulation results agree with analytic results when such results are known. We also show how to estimate the posterior distribution of the covariance matrix using Markov chain Monte Carlo with the elements of the covariance matrix integrated out, and give empirical results showing that the sampler is much more efficient than current methods. The article also shows empirically that there is minimal change in statistical efficiency in using the mixture over decomposable graphs prior for estimating a general covariance matrix compared to the Bayesian estimator of Wong et al. (2003), even when the graph of the covariance matrix is nondecomposable.
However, our approach has some important computational advantages over that of Wong et al. (2003). Finally, we note that both the prior and the simulation method to evaluate the prior apply generally to any decomposable graphical model.
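To make the size-based prior concrete, the following is a minimal sketch (not code from the paper): it counts decomposable graphs by size via exhaustive enumeration on a tiny vertex set, using the fact that a graph is decomposable iff it is chordal, and then spreads a user-specified per-size probability uniformly over the graphs of each size. All function names are illustrative; the paper estimates the counts by simulation for larger dimensions, where enumeration is infeasible.

```python
from itertools import combinations

def is_decomposable(adj):
    """Chordality test (equivalent to decomposability for undirected
    Gaussian graphical models): repeatedly delete simplicial vertices,
    i.e. vertices whose remaining neighbours form a clique."""
    remaining = set(adj)
    nbrs = {v: set(adj[v]) for v in adj}
    while remaining:
        simplicial = None
        for v in remaining:
            nv = nbrs[v] & remaining
            if all(b in nbrs[a] for a, b in combinations(nv, 2)):
                simplicial = v
                break
        if simplicial is None:
            return False  # no simplicial vertex left -> not chordal
        remaining.remove(simplicial)
    return True

def count_decomposable_by_size(p):
    """Exhaustively count decomposable graphs on p labelled vertices,
    grouped by edge count (only feasible for very small p)."""
    all_edges = list(combinations(range(p), 2))
    counts = {k: 0 for k in range(len(all_edges) + 1)}
    for k in counts:
        for edge_set in combinations(all_edges, k):
            adj = {v: set() for v in range(p)}
            for a, b in edge_set:
                adj[a].add(b)
                adj[b].add(a)
            if is_decomposable(adj):
                counts[k] += 1
    return counts

counts = count_decomposable_by_size(4)  # 4 nodes, 0..6 possible edges

# Size-based prior: user-specified probability per graph size (here
# uniform over sizes), divided equally among the decomposable graphs
# of that size, so P(G) = p(size(G)) / N(size(G)).
sizes = [k for k in counts if counts[k] > 0]
size_prob = {k: 1.0 / len(sizes) for k in sizes}
graph_prob = {k: size_prob[k] / counts[k] for k in sizes}
```

On 4 vertices this recovers the known total of 61 decomposable graphs out of 64 (the three labelled 4-cycles are the only non-chordal ones), and the graph probabilities sum to one by construction.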
Similar references
Speech Enhancement Using Gaussian Mixture Models, Explicit Bayesian Estimation and Wiener Filtering
Gaussian Mixture Models (GMMs) of power spectral densities of speech and noise are used with explicit Bayesian estimations in Wiener filtering of noisy speech. No assumption is made on the nature or stationarity of the noise. No voice activity detection (VAD) or any other means is employed to estimate the input SNR. The GMM mean vectors are used to form sets of over-determined system of equatio...
Flexible Covariance Estimation in Graphical Gaussian Models
In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph G. Working with the WPG family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278–1323] we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The WPG family includes the classical inverse of t...
Mixture Modeling, Sparse Covariance Estimation and Parallel Computing in Bayesian Analysis
Bayesian inference for Gaussian graphical models beyond decomposable graphs
Bayesian inference for graphical models has received much attention in the literature in recent years. It is well known that when the graph G is decomposable, Bayesian inference is significantly more tractable than in the general non-decomposable setting. Penalized likelihood inference on the other hand has made tremendous gains in the past few years in terms of scalability and tractability. Ba...
Journal: Statistics and Computing
Volume: 19, Issue: -
Pages: -
Year of publication: 2009